

Search for: All records

Creators/Authors contains: "Yarger, Lynette"


  1. This paper investigates the implementation of AI-driven chatbots as a solution to streamline academic advising and improve the student experience. Through a review of preliminary results from the Nittany Advisor chatbot, we show how AI chatbots can boost advising efficiency and increase student satisfaction, and we examine how chatbots can provide information on course requirements, prerequisites, and academic policies while flagging the need for human intervention for more complex queries. We conclude that AI chatbots hold considerable promise for transforming academic advising by addressing routine questions, streamlining access to crucial information, and fostering a more responsive and supportive educational environment.
    Free, publicly-accessible full text available February 18, 2026
  2. Technology firms increasingly leverage artificial intelligence (AI) to enhance human decision-making processes in the rapidly evolving talent acquisition landscape. However, the ramifications of these advancements on workforce diversity remain a topic of intense debate. Drawing upon Gilliland's procedural justice framework, we explore how IT job candidates interpret the fairness of AI-driven recruitment systems. Gilliland's model posits that an organization's adherence to specific fairness principles, such as honesty and the opportunity to perform, profoundly shapes candidates' self-perceptions, their judgments of the recruitment system's equity, and the overall attractiveness of the organization. Using focus groups and interviews, we interacted with 47 women, Black and Latinx or Hispanic undergraduates specializing in computer and information science to discern how gender, race, and ethnicity influence attitudes toward AI in hiring. Three procedural justice rules, consistency of administration, job-relatedness, and selection information, emerged as critical in shaping participants' fairness perceptions. Although discussed less frequently, the propriety of questions held significant resonance for Black and Latinx or Hispanic participants. Our study underscores the critical role of fairness evaluations for organizations, especially those striving to diversify the tech workforce.
  3. The longstanding underrepresentation and attrition of minoritized racial and ethnic groups and women in computing courses, majors, and careers continues to plague researchers, educators, and policymakers alike. Informed by Sue and colleagues' microaggression framework and Rowe's microaffirmation framework, this study theorizes identity-related factors that undermine and support efforts to increase the representation and meaningful participation of minoritized racial and ethnic groups and women in computing education. We conclude with implications for teaching practices to advance equity, inclusion, and justice in computing education.
  4. Sheila S. Jaswal, Amherst College (Ed.)
    STEM higher education in the U.S. has long been an uninviting space for minoritized individuals, particularly women, persons of color, and international students and scholars. In recent years, the contemporary realities of a global pandemic, sociopolitical divides, and heightened racial tensions, along with elevated levels of mental illness and emotional distress among college students, have intensified the need for an undergraduate STEM education culture and climate that recognizes and values the humanity of our students. The purpose of this article is to advance a more humanized undergraduate STEM education and to provide a framework to guide efforts toward achieving that vision. We argue that humanizing approaches recognize and value the complexity of individuals and the cultural capital that they bring to their education, and that this is particularly important for empowering minoritized students who are subordinated in status in STEM higher education. A STEM education that centers students' humanity gives rise to equity and promotes human well-being and flourishing alongside knowledge acquisition and skill development. We then offer a guiding framework for conceptualizing the broader ecosystem in which undergraduate STEM students are embedded, and use it to outline the individual and collective roles that different stakeholders in the ecosystem can play in humanizing STEM education.
  5.
    Purpose: The purpose of this paper is to offer a critical analysis of talent acquisition software and its potential for fostering equity in the hiring process for underrepresented IT professionals. The under-representation of women, African-American and Latinx professionals in the IT workforce is a longstanding issue that contributes to and is impacted by algorithmic bias.
    Design/methodology/approach: Sources of algorithmic bias in talent acquisition software are presented. Feminist design thinking is presented as a theoretical lens for mitigating algorithmic bias.
    Findings: Data are just one tool for recruiters to use; human expertise is still necessary. Even well-intentioned algorithms are not neutral and should be audited for morally and legally unacceptable decisions. Feminist design thinking provides a theoretical framework for considering equity in the hiring decisions made by talent acquisition systems and their users.
    Social implications: This research implies that algorithms may serve to codify deep-seated biases, making IT work environments just as homogeneous as they are currently. If bias exists in talent acquisition software, the potential for propagating inequity and harm is far more significant and widespread due to the homogeneity of the specialists creating artificial intelligence (AI) systems.
    Originality/value: This work uses equity as a central concept for considering algorithmic bias in talent acquisition. Feminist design thinking provides a framework for fostering a richer understanding of what fairness means and evaluating how AI software might impact marginalized populations.